Parallel, Probabilistic, Self-organizing, Hierarchical Neural Networks
Authors
Abstract
Valafar, Faramarz. Ph.D., Purdue University, August 1993. PARALLEL PROBABILISTIC SELF-ORGANIZING HIERARCHICAL NEURAL NETWORKS. Major Professor: Okan K. Ersoy. A new neural network architecture called the Parallel Probabilistic Self-organizing Hierarchical Neural Network (PPSHNN) is introduced. The PPSHNN is designed to solve complex classification problems by dividing the input vector space into regions and performing classification on those regions. It consists of several modules which operate hierarchically during learning and in parallel during testing. Each module has the task of classification for a region of the input information space, as well as the task of participating in the formation of these regions through post- and pre-rejection schemes. The decomposition into regions is performed in a manner that makes classification easier on each of the regions. The post-rejector submodule performs a bitwise statistical analysis and detection of hard-to-classify vectors. The pre-rejector module accepts only those classes for which the module is trained and rejects the others. The PNS module is developed as a variation of the PPSHNN module. If delta-rule networks are used to build the submodules of the PNS, then it uses piecewise-linear boundaries to divide the problem space into regions. The PNS module has a high classification accuracy while remaining relatively inexpensive. The submodules of the PNS are fractile in nature, meaning that each such unit may itself consist of a number of PNS modules. The PNS module is discussed as the building block for the synthesis of the PPSHNN. The SIMD version of the PPSHNN is implemented on the MasPar with 16K processors. In all the experiments performed, this network outperformed the previously used networks in terms of classification accuracy and speed.
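The mechanism the abstract describes — a delta-rule module that post-rejects hard-to-classify vectors so that a further module can specialize on that harder region — can be sketched as follows. This is an illustrative toy only: the margin statistic, the rejection threshold, and the synthetic data are assumptions, not the thesis's actual submodules or rejection scheme.

```python
# Toy two-stage, PPSHNN-flavored pipeline: module 1 is a delta-rule (LMS)
# classifier whose post-rejector flags low-margin vectors; module 2 is trained
# only on that rejected (hard) region. Thresholds and data are assumed.
import numpy as np

rng = np.random.default_rng(0)

def train_delta(X, Y, lr=0.1, epochs=300):
    """Batch delta-rule (LMS) training of one linear layer with a bias unit."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W = np.zeros((Xb.shape[1], Y.shape[1]))
    for _ in range(epochs):
        W += lr * Xb.T @ (Y - Xb @ W) / len(X)
    return W

def scores(W, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ W

# Two overlapping Gaussian classes with one-hot targets.
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(1.5, 1.0, (n, 2))])
y = np.repeat([0, 1], n)
Y = np.eye(2)[y]

W1 = train_delta(X, Y)
s1 = scores(W1, X)
margin = np.abs(s1[:, 0] - s1[:, 1])   # post-rejector statistic (assumed form)
hard = margin < 0.3                    # low-margin vectors are rejected

# Module 2 specializes on the rejected region of the input space.
W2 = train_delta(X[hard], Y[hard])

def classify(x):
    s = scores(W1, x)
    out = s.argmax(axis=1)
    rej = np.abs(s[:, 0] - s[:, 1]) < 0.3
    out[rej] = scores(W2, x[rej]).argmax(axis=1)   # route rejects to module 2
    return out

acc = (classify(X) == y).mean()
print(f"rejected {hard.sum()} of {2 * n} vectors, training accuracy {acc:.2f}")
```

At test time the modules can run in parallel, as in the PPSHNN, since routing a vector needs only module 1's margin; here they are evaluated sequentially for clarity.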
Similar Articles
Hierarchical Clustering Analysis with SOM Networks
This work presents a neural network model for the clustering analysis of data based on Self Organizing Maps (SOM). The model evolves during the training stage towards a hierarchical structure according to the input requirements. The hierarchical structure symbolizes a specialization tool that provides refinements of the classification process. The structure behaves like a single map with differ...
Approximation with neural networks: Between local and global
We investigate neural network based approximation methods. These methods depend on the locality of the basis functions. After discussing local and global basis functions, we propose a multi-resolution hierarchical method. The various resolutions are stored at various levels in a tree. At the root of the tree, a global approximation is kept; the leaves store the learning samples themselves. Int...
The growing hierarchical self-organizing map: exploratory analysis of high-dimensional data
The self-organizing map (SOM) is a very popular unsupervised neural-network model for the analysis of high-dimensional input data as in data mining applications. However, at least two limitations have to be noted, which are related to the static architecture of this model as well as to the limited capabilities for the representation of hierarchical relations of the data. With our novel growing ...
Comparing Performance of Neural Networks Recognizing Machine Generated Characters
Neural networks are a popular tool in the area of pattern recognition. However, since a very large number of neural network architectures exist, it has not been established which one is the most efficient. In this paper we compare the performance of three neural network architectures: Kohonen’s self-organizing network, probabilistic neural network, and a modified backpropagation applied to a si...
Probabilistic self-organizing maps for qualitative data
We present a self-organizing map model to study qualitative data (also called categorical data). It is based on a probabilistic framework which does not assume any prespecified distribution of the input data. Stochastic approximation theory is used to develop a learning rule that builds an approximation of a discrete distribution on each unit. This way, the internal structure of the input datas...